Torch Dynamo allow


Jumping in here because I'm having an issue with Dynamo and I want to check first before opening a duplicate issue.

I'm trying to get the compiled graph not to break on my custom ops (custom autograd functions bound to C++/CUDA functions we wrote). When I wrap them in allow_in_graph and add the torch.compile decorator to the forward function that calls them, they fail because they get passed FakeTensor inputs. If I make the C functions ignore tensors whose data pointers are null, the error goes away, but of course they then ignore everything passed to the compiled model and just return empty tensors.

Is that somewhat similar to the issue you're having @pallavides ?

I guess my question is: how would one go about allowing FakeTensor inputs to C++/CUDA kernels? I couldn't find any documentation on it, nor was I able to figure anything out by searching issues, PRs, and ATen's source code.
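For what it's worth, my understanding is that FakeTensors are metadata-only (shape/dtype/device, no real storage), so the tracer should never reach the real C++/CUDA kernel at all; instead, the op needs a separate "fake" (meta) implementation that only propagates output metadata. Recent PyTorch exposes this via `torch.library` (e.g. `register_fake`, previously `impl_abstract`), though I haven't verified the exact API against your setup. Here's a pure-Python sketch of the dispatch idea, with all names (`MiniTensor`, `register_fake_impl`, `dispatch`) made up for illustration rather than taken from PyTorch:

```python
# Illustrative sketch (not PyTorch internals): why a raw C kernel fails
# under tracing, and the fake-implementation pattern that avoids it.

class MiniTensor:
    def __init__(self, shape, data_ptr):
        self.shape = shape
        self.data_ptr = data_ptr  # 0 models a FakeTensor's null storage

def real_kernel(t):
    # Stands in for a bound C++/CUDA kernel: it dereferences storage,
    # so a null data pointer is a hard error.
    if t.data_ptr == 0:
        raise RuntimeError("null data pointer")
    return MiniTensor(t.shape, data_ptr=0xDEAD)

_fake_impls = {}

def register_fake_impl(op, fn):
    # Analogous in spirit to torch.library.register_fake: record a
    # shape/dtype-only implementation to be used during tracing.
    _fake_impls[op] = fn

def dispatch(op, t):
    # Route metadata-only tensors to the fake impl instead of the real
    # kernel, so tracing never touches storage.
    if t.data_ptr == 0 and op in _fake_impls:
        return _fake_impls[op](t)
    return op(t)

# Fake impl: compute output metadata without reading any data.
register_fake_impl(real_kernel, lambda t: MiniTensor(t.shape, data_ptr=0))

fake_in = MiniTensor((4, 8), data_ptr=0)
out = dispatch(real_kernel, fake_in)
print(out.shape)  # (4, 8) -- shape propagated, no real memory touched
```

The point is that "ignoring tensors with null data pointers" inside the kernel silently breaks the real path; the fix is a separate metadata-only path that the tracer takes instead.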

If I'm off and this is completely unrelated I'll start a separate issue.
